# Dynamic precision allocation

The GGUF releases below (all published by Mungert) apply IQ-DynamicGate dynamic-precision quantization or related ultra-low-bit formats to a range of base models:

| Model | License | Tags | Downloads | Likes | Description |
|---|---|---|---|---|---|
| Dans PersonalityEngine V1.3.0 24b GGUF | Apache-2.0 | Large Language Model, Transformers | 678 | 2 | Dans-PersonalityEngine-V1.3.0-24b is a multi-functional model series fine-tuned on more than 50 professional datasets, supporting multilingual and professional-domain tasks. |
| Josiefied Qwen3 8B Abliterated V1 GGUF | | Large Language Model | 559 | 1 | Quantized version of Qwen3-8B using IQ-DynamicGate ultra-low-bit quantization to optimize memory efficiency and inference speed. |
| Qwen3 30B A3B GGUF | Apache-2.0 | Large Language Model | 2,135 | 1 | Qwen3-30B-A3B is a large language model based on Qwen3-30B-A3B-Base, supporting text generation tasks and optimized for memory efficiency with ultra-low-bit quantization. |
| Qwen3 14B GGUF | Apache-2.0 | Large Language Model | 1,597 | 6 | Qwen3-14B is a GGUF-format model generated from Qwen/Qwen3-14B-Base, supporting text generation tasks and optimized for memory efficiency with IQ-DynamicGate ultra-low-bit quantization. |
| GLM Z1 9B 0414 GGUF | MIT | Large Language Model, Supports Multiple Languages | 1,598 | 3 | GLM-Z1-9B-0414 is a bilingual (Chinese and English) text generation model in GGUF format, available at quantization levels ranging from BF16 down to ultra-low-bit (1-2 bit). |
| Olympiccoder 7B GGUF | Apache-2.0 | Large Language Model, English | 849 | 3 | OlympicCoder-7B is a code generation model based on Qwen2.5-Coder-7B-Instruct, using IQ-DynamicGate ultra-low-bit quantization and designed for memory-constrained environments. |
| GLM 4 32B 0414 GGUF | MIT | Large Language Model, Transformers, Supports Multiple Languages | 817 | 4 | GLM-4-32B-0414 GGUF is a series of powerful text generation models offered in various quantization formats to suit different hardware and memory budgets. |
| Granite 3.3 8b Instruct GGUF | Apache-2.0 | Large Language Model | 759 | 2 | Ultra-low-bit (1-2 bit) quantized language model using IQ-DynamicGate technology, suitable for memory-constrained environments. |
| Qwq 32B GGUF | Apache-2.0 | Large Language Model, English | 5,770 | 17 | Ultra-low-bit (1-2 bit) quantized large language model using IQ-DynamicGate technology, supporting multilingual text generation tasks. |
| Orpheus 3b 0.1 Ft GGUF | Apache-2.0 | Large Language Model, English | 1,427 | 1 | Ultra-low-bit quantized model based on the Llama-3-8B architecture, using IQ-DynamicGate adaptive 1-2 bit precision quantization, suitable for memory-constrained environments. |
| Llama 3.1 70B Instruct GGUF | | Large Language Model, Supports Multiple Languages | 19.52k | 3 | Ultra-low-bit (1-2 bit) quantized model based on Llama-3.1-70B, using IQ-DynamicGate adaptive-precision quantization to improve accuracy while maintaining memory efficiency. |
| Olympiccoder 32B GGUF | Apache-2.0 | Large Language Model, English | 361 | 3 | OlympicCoder-32B is a code generation model based on Qwen2.5-Coder-32B-Instruct, employing IQ-DynamicGate ultra-low-bit quantization for efficient inference in memory-constrained environments. |
| Gemma 3 27b It GGUF | | Text-to-Image | 4,034 | 6 | GGUF quantized version of Gemma 3 with 27B parameters, supporting image-text interaction tasks. |
| EXAONE Deep 32B GGUF | Other | Large Language Model, Supports Multiple Languages | 2,249 | 3 | EXAONE-Deep-32B is a 32B-parameter large language model supporting English and Korean, designed specifically for text generation tasks. |
| Llama 3.1 Nemotron Nano 8B V1 GGUF | Other | Large Language Model, English | 2,088 | 4 | An 8B-parameter model based on the Llama-3 architecture, optimized for memory usage with IQ-DynamicGate ultra-low-bit quantization. |
| EXAONE Deep 7.8B GGUF | Other | Large Language Model, Supports Multiple Languages | 1,791 | 5 | A 7.8B-parameter model with ultra-low-bit (1-2 bit) IQ-DynamicGate quantization, supporting English and Korean text generation tasks. |
| Llama 3.1 8B Instruct GGUF | | Large Language Model, Supports Multiple Languages | 1,073 | 3 | Llama-3.1-8B-Instruct is an instruction-tuned version of Llama-3-8B, using IQ-DynamicGate ultra-low-bit (1-2 bit) quantization to improve accuracy while maintaining memory efficiency. |
| Mistral 7B Instruct V0.2 GGUF | Apache-2.0 | Large Language Model | 742 | 2 | Mistral-7B-Instruct-v0.2 is an instruction-tuned model based on the Mistral-7B architecture, supporting text generation tasks and optimized for memory efficiency with IQ-DynamicGate ultra-low-bit quantization. |
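The practical appeal of these quantizations is the memory budget: as a rough back-of-envelope figure, and ignoring metadata and activation overhead, a 14B-parameter model stored at about 2 bits per weight occupies on the order of 3.5 GB, versus roughly 28 GB in FP16. As a minimal sketch of how one of these GGUF files is typically consumed (assuming the `llama-cpp-python` bindings are installed and a quantized file has been downloaded locally; the file name below is hypothetical):

```python
# Minimal sketch: loading and prompting an ultra-low-bit GGUF quantization
# with llama-cpp-python. The model path is a placeholder; substitute the
# quantization variant you actually downloaded (e.g. an IQ1/IQ2 file for
# very low memory, or a Q4_K_M file for a better quality/size trade-off).
from llama_cpp import Llama

llm = Llama(
    model_path="./Qwen3-14B-IQ2_XS.gguf",  # hypothetical local file name
    n_ctx=4096,        # context window; smaller values further reduce memory use
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm(
    "Explain what ultra-low-bit quantization trades away.",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```

Lower-bit variants of the same model load faster and fit on smaller devices, at the cost of some generation quality; the listings above position the IQ-DynamicGate variants as a way to soften that trade-off by allocating precision adaptively.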